14 research outputs found

    An Integrated Fuzzy Multi-Criteria Decision Making Method For Supplier Evaluation

    Get PDF
    This research investigates the risk exposure arising from the supplier evaluation criteria of cost, quality, delivery, and supplier flexibility.
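
    As a purely illustrative aside, the sketch below shows one way a fuzzy multi-criteria evaluation over the four criteria named above could be carried out with triangular fuzzy numbers; the weights, supplier ratings, and centroid defuzzification are assumptions for illustration, not the method developed in this research.

```python
# Purely illustrative: fuzzy weighted scoring of suppliers over the four
# criteria named in the abstract, using triangular fuzzy numbers (TFNs).
# All weights, ratings, and the centroid defuzzification are assumed values.

criteria = ["cost", "quality", "delivery", "flexibility"]

# Assumed triangular fuzzy weights (low, mode, high) for each criterion
weights = {
    "cost":        (0.20, 0.30, 0.40),
    "quality":     (0.30, 0.40, 0.50),
    "delivery":    (0.10, 0.20, 0.30),
    "flexibility": (0.05, 0.10, 0.20),
}

# Assumed fuzzy ratings of two hypothetical suppliers on a 0-10 scale
ratings = {
    "Supplier A": {"cost": (6, 7, 8), "quality": (7, 8, 9),
                   "delivery": (5, 6, 7), "flexibility": (4, 5, 6)},
    "Supplier B": {"cost": (7, 8, 9), "quality": (5, 6, 7),
                   "delivery": (6, 7, 8), "flexibility": (6, 7, 8)},
}

def centroid(tfn):
    """Crisp value of a TFN via its centroid (average of the three points)."""
    return sum(tfn) / 3.0

for supplier, r in ratings.items():
    # Component-wise fuzzy weighted sum over (low, mode, high), then defuzzify
    weighted = tuple(sum(weights[c][i] * r[c][i] for c in criteria) for i in range(3))
    total_w = tuple(sum(weights[c][i] for c in criteria) for i in range(3))
    score = centroid(weighted) / centroid(total_w)   # crude crisp normalization
    print(f"{supplier}: crisp score ~ {score:.2f}")
```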

    Development Of Two New Auxiliary Information Control Charts, And Economic And Economic-Statistical Designs Of Several Auxiliary Information Control Charts

    Get PDF
    The use of the auxiliary information (AI) concept in control charts is receiving increasing attention among researchers. Control charts with auxiliary characteristics have been shown to be more efficient than control charts without such characteristics. The salient features of the AI concept have motivated us to develop two new AI charts. The first objective of this thesis is to develop the run sum X‾ AI (RS X‾ AI) chart for monitoring the process mean. The optimal parameters computed using the optimization algorithms developed, together with a step-by-step approach for constructing the optimal RS X‾ AI chart, are provided in this thesis. The average run length (ARL) and expected average run length (EARL) performance criteria are used to evaluate the performance of the RS X‾ AI chart. Results show that the RS X‾ AI chart generally surpasses the existing X‾ AI, synthetic X‾ AI and EWMA X‾ AI charts in the detection of out-of-control signals. The second objective of this thesis is to develop the variable sampling interval exponentially weighted moving average t AI (VSI EWMA t AI) chart for monitoring the process mean when errors exist in estimating the process standard deviation. The VSI EWMA t AI chart allows either the short or the long sampling interval to be adopted, based on the information about the process quality given by the current plotting statistic of the chart
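
    As a rough illustration of the ARL criterion mentioned above, the following sketch estimates the in-control and out-of-control ARL of a basic Shewhart-type AI chart by Monte Carlo simulation; the regression-type AI statistic, correlation, limit coefficient, and sample size are assumptions for illustration and do not reproduce the RS X‾ AI or VSI EWMA t AI designs of the thesis.

```python
# Illustrative Monte Carlo estimate of the ARL for a basic Shewhart-type
# auxiliary-information (AI) chart; all parameters below are assumed values.
import numpy as np

rng = np.random.default_rng(1)

n, rho, L = 5, 0.8, 3.0                  # sample size, X-Q correlation, limit coefficient
mu_x, mu_q, sigma_x, sigma_q = 0.0, 0.0, 1.0, 1.0
shift = 0.5                              # mean shift (in sigma_x units) to be detected

# Regression-type AI statistic M = xbar + b*(mu_q - qbar), with b = rho*sigma_x/sigma_q;
# its in-control standard error is sigma_x*sqrt((1 - rho^2)/n).
b = rho * sigma_x / sigma_q
se = sigma_x * np.sqrt((1.0 - rho**2) / n)
ucl, lcl = mu_x + L * se, mu_x - L * se
cov = [[sigma_x**2, rho * sigma_x * sigma_q],
       [rho * sigma_x * sigma_q, sigma_q**2]]

def run_length(process_mean):
    """Samples taken until the AI statistic falls outside the control limits."""
    t = 0
    while True:
        t += 1
        xq = rng.multivariate_normal([process_mean, mu_q], cov, size=n)
        m = xq[:, 0].mean() + b * (mu_q - xq[:, 1].mean())
        if not lcl <= m <= ucl:
            return t

arl0 = np.mean([run_length(mu_x) for _ in range(2000)])
arl1 = np.mean([run_length(mu_x + shift * sigma_x) for _ in range(2000)])
print(f"estimated in-control ARL ~ {arl0:.0f}, out-of-control ARL ~ {arl1:.1f}")
```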

    Variable sampling interval run sum X‾ chart with estimated process parameters

    Get PDF
    The X‾ type control chart is often evaluated by assuming that the process parameters are known. However, the exact values of the process parameters are rarely known, so a Phase-I dataset is needed to estimate them. In this paper, the performance of the variable sampling interval run sum X‾ (VSI RS X‾) chart with estimated process parameters is evaluated using the average of the average time to signal (AATS) as the performance measure, and the optimal design of the proposed chart that minimizes the out-of-control AATS is developed. The standard deviation of the average time to signal (SDATS) is then used to identify the number of Phase-I samples (w) needed for the in-control AATS performance to be close to that of the known process parameter case. Results show that a large w is needed to minimize the performance gap between the known and unknown process parameter cases of the VSI RS X‾ chart
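
    The sketch below illustrates, under assumed settings, how Phase-I estimation affects the AATS and SDATS of a plain VSI X‾ chart; the run sum feature of the actual VSI RS X‾ chart is omitted, and the limit coefficients, sampling intervals, and number of Phase-I samples are illustrative choices rather than the paper's design.

```python
# Illustrative simulation of how Phase-I parameter estimation affects the
# in-control AATS and SDATS of a plain VSI Xbar chart (run sum feature omitted).
# All design constants below are assumed values.
import numpy as np

rng = np.random.default_rng(7)

n, w = 5, 50                  # subgroup size, number of Phase-I samples
K, W = 3.0, 1.0               # control- and warning-limit coefficients
h_long, h_short = 1.0, 0.1    # long and short sampling intervals (hours)
C4 = 0.9400                   # unbiasing constant for S with n = 5

def time_to_signal(mu_hat, sigma_hat):
    """In-control time until a false alarm, given the estimated parameters."""
    se = sigma_hat / np.sqrt(n)
    t, interval = 0.0, h_long
    while True:
        t += interval
        xbar = rng.normal(0.0, 1.0 / np.sqrt(n))      # true process: N(0, 1)
        z = abs(xbar - mu_hat) / se
        if z > K:                                     # out-of-control signal
            return t
        interval = h_short if z > W else h_long       # warning zone -> sample sooner

ats = []
for _ in range(200):                                  # 200 simulated Phase-I datasets
    phase1 = rng.normal(0.0, 1.0, size=(w, n))
    mu_hat = phase1.mean()
    sigma_hat = phase1.std(axis=1, ddof=1).mean() / C4
    ats.append(np.mean([time_to_signal(mu_hat, sigma_hat) for _ in range(40)]))

print(f"in-control AATS ~ {np.mean(ats):.0f} h, SDATS ~ {np.std(ats):.0f} h")
```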

    An improved consensus-based group decision making model with heterogeneous information

    Full text link
    In group decision making (GDM) problems, it is natural for decision makers (DMs) to provide different preferences and evaluations owing to varying domain knowledge and cultural values. When the number of DMs is large, a higher degree of heterogeneity is expected, and it is difficult to translate heterogeneous information into one unified preference without loss of context. In this aspect, the current GDM models face two main challenges, i.e., handling the complexity pertaining to the unification of heterogeneous information from a large number of DMs, and providing optimal solutions based on unification methods. This paper presents a new consensus-based GDM model to manage heterogeneous information. In the new GDM model, an aggregation of individual priority (AIP)-based aggregation mechanism, which is able to employ flexible methods for deriving each DM's individual priority and to avoid information loss caused by unifying heterogeneous information, is utilized to aggregate the individual preferences. To reach a consensus more efficiently, different revision schemes are employed to reward/penalize the cooperative/non-cooperative DMs, respectively. The temporary collective opinion used to guide the revision process is derived by aggregating only those non-conflicting opinions at each round of revision. In order to measure the consensus in a robust manner, a position-based dissimilarity measure is developed. Compared with the existing GDM models, the proposed GDM model is more effective and flexible in processing heterogeneous information. It can be used to handle different types of information with different degrees of granularity. Six types of information are exemplified in this paper, i.e., ordinal, interval, fuzzy number, linguistic, intuitionistic fuzzy set, and real number. The results indicate that the position-based consensus measure is able to overcome possible distortions of the results in large-scale GDM problems
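
    The following sketch gives a simplified, assumption-laden flavour of such a consensus process: individual priority vectors are aggregated directly (in the spirit of AIP-based aggregation), a rank-position-based dissimilarity serves as the consensus measure, and DMs exceeding a threshold are nudged toward the collective opinion; the specific formulas, thresholds, and revision rule are illustrative, not the model proposed in the paper.

```python
# Simplified, assumption-based consensus loop over individual priority vectors,
# with a rank-position (Spearman-footrule style) dissimilarity measure.
import numpy as np

rng = np.random.default_rng(3)
n_dms, n_alts = 8, 4
threshold, step, max_rounds = 0.25, 0.3, 20

# Each DM's individual priority vector over the alternatives (rows sum to 1),
# assumed to have been derived already from their heterogeneous preference formats.
P = rng.dirichlet(np.ones(n_alts), size=n_dms)

def positions(v):
    """Rank position of each alternative (0 = most preferred)."""
    return np.argsort(np.argsort(-v))

def dissimilarity(v, c):
    """Normalized sum of rank-position differences between a DM and the collective."""
    return np.abs(positions(v) - positions(c)).sum() / (n_alts**2 // 2)

for rnd in range(1, max_rounds + 1):
    collective = P.mean(axis=0)                       # simple average aggregation
    d = np.array([dissimilarity(p, collective) for p in P])
    if d.max() <= threshold:                          # consensus reached
        break
    # Revision scheme: DMs far from the collective opinion are nudged toward it
    for k in np.where(d > threshold)[0]:
        P[k] = (1 - step) * P[k] + step * collective  # stays a valid priority vector

print(f"round {rnd}: max position-based dissimilarity = {d.max():.2f}")
print("collective ranking (best first):", np.argsort(-collective))
```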

    Optimal designs of the side sensitive synthetic chart for the coefficient of variation based on the median run length and expected median run length.

    No full text
    The side sensitive synthetic chart was proposed to improve the performance of the synthetic chart in monitoring shifts in the coefficient of variation (γ), by incorporating the side sensitivity feature, where successive non-conforming samples must fall on the same side of the control limits. The existing side sensitive synthetic-γ chart has only been evaluated in terms of the average run length (ARL) and expected average run length (EARL). However, the run length distribution is skewed to the right, hence the actual performance of the chart may frequently differ from what is shown by the ARL and EARL. This paper evaluates the entire run length distribution by studying its percentiles. It is shown that false alarms frequently happen much earlier than the in-control ARL (ARL0), and that small shifts are often detected earlier than indicated by the out-of-control ARL (ARL1). Subsequently, this paper proposes an alternative design based on the median run length (MRL) and expected median run length (EMRL). The optimal design based on the MRL yields a smaller out-of-control MRL (MRL1), indicating quicker detection of the out-of-control condition compared with the existing design, while the results of the optimal design based on the EMRL are similar to those of the existing designs. Comparisons with the synthetic-γ chart without side sensitivity show that side sensitivity reduces the median number of samples required to detect a shift and reduces the variability of the run length. Finally, the proposed designs are implemented on an actual industrial example
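
    To illustrate why an MRL-based design matters, the sketch below simulates the run length distribution of a basic chart on the sample coefficient of variation (not the side sensitive synthetic-γ chart) and compares its mean (ARL) with its percentiles; the right-skew makes the median and lower percentiles fall well below the ARL, as noted above. All settings are assumptions for illustration.

```python
# Illustrative run length percentiles for a basic chart on the sample coefficient
# of variation (not the side sensitive synthetic-gamma design); assumed settings.
import numpy as np

rng = np.random.default_rng(11)
n, mu, sigma = 5, 10.0, 0.5              # subgroup size, in-control mean and std dev

def sample_cv(m, s):
    x = rng.normal(m, s, size=n)
    return x.std(ddof=1) / x.mean()

# Probability limits from the empirical 0.135% / 99.865% points of the in-control CV
ref = np.array([sample_cv(mu, sigma) for _ in range(200_000)])
lcl, ucl = np.quantile(ref, [0.00135, 0.99865])

def run_length():
    """Samples taken until the in-control CV statistic falls outside the limits."""
    t = 0
    while True:
        t += 1
        cv = sample_cv(mu, sigma)
        if not lcl <= cv <= ucl:
            return t

rl = np.array([run_length() for _ in range(2000)])
pct = np.percentile(rl, [5, 50, 95]).round().astype(int)
print(f"ARL0 ~ {rl.mean():.0f}; run length 5th / 50th (MRL) / 95th percentiles: {pct}")
```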